
    The emergence of synaesthesia in a Neuronal Network Model via changes in perceptual sensitivity and plasticity

    Synaesthesia is an unusual perceptual experience in which an inducer stimulus triggers a percept in a different domain in addition to its own. To explore the conditions under which synaesthesia evolves, we studied a neuronal network model that represents two recurrently connected neural systems. The interactions in the network evolve according to learning rules that optimize sensory sensitivity. We demonstrate several scenarios, such as sensory deprivation or heightened plasticity, under which synaesthesia can evolve even though the inputs to the two systems are statistically independent and the initial cross-talk interactions are zero. Sensory deprivation is the known causal mechanism for acquired synaesthesia, and increased plasticity is implicated in developmental synaesthesia. The model unifies different causes of synaesthesia within a single theoretical framework and repositions synaesthesia not as a quirk of aberrant connectivity, but as a functional brain state that can emerge as a consequence of optimising sensory information processing.
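
    As a rough illustration of the mechanism described above, the sketch below couples two rate units whose inputs are statistically independent and whose cross-talk weight starts at zero; a Hebbian-like update with weak decay can then let nonzero cross-talk emerge. The rule, parameters, and variable names are illustrative assumptions, not the model or learning rules used in the paper.

        import numpy as np

        rng = np.random.default_rng(0)
        eta = 0.01          # plasticity rate (larger eta = "heightened plasticity"; illustrative)
        deprivation = 0.1   # scale of input to system B ("sensory deprivation"; illustrative)
        w_cross = 0.0       # cross-talk from system A to system B starts at zero

        for _ in range(20000):
            x_a = rng.normal()                  # input to system A
            x_b = deprivation * rng.normal()    # weak, independent input to system B
            r_a = np.tanh(x_a)                  # activity of system A
            r_b = np.tanh(x_b + w_cross * r_a)  # system B also receives cross-talk from A
            # Hebbian-like update with weak decay: once fluctuations make w_cross nonzero,
            # the induced A-B correlation reinforces it
            w_cross += eta * (r_a * r_b - 0.1 * w_cross)

        print("learned cross-talk weight:", w_cross)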

    Predicting brain evoked response to external stimuli from temporal correlations of spontaneous activity

    The relation between spontaneous and stimulated global brain activity is a fundamental problem in understanding brain function. This question is investigated both theoretically and experimentally within the context of nonequilibrium fluctuation-dissipation relations. We consider the stochastic coarse-grained Wilson-Cowan model in the linear noise approximation and compare analytical results to experimental data from magnetoencephalography (MEG) of the human brain. The short-time behavior of the autocorrelation function for spontaneous activity is characterized by a double-exponential decay, with two characteristic times differing by two orders of magnitude. Conversely, the response function exhibits a single exponential decay, in agreement with experimental data for evoked activity under visual stimulation. The results suggest that the brain response to weak external stimuli can be predicted from the observation of spontaneous activity, and pave the way for controlled experiments on the brain response under different external perturbations.
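
    Schematically, the two behaviours contrasted above can be written as

        C(t) \approx A_1\, e^{-t/\tau_1} + A_2\, e^{-t/\tau_2}, \qquad \tau_2/\tau_1 \sim 10^2,
        R(t) \approx R_0\, e^{-t/\tau_R},

    where C(t) denotes the autocorrelation of spontaneous activity and R(t) the response to a weak external stimulus; the symbols and prefactors are illustrative notation, not the paper's.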

    Brain response during the M170 time interval is sensitive to socially relevant information

    Deciphering the social meaning of facial displays is a highly complex neurological process. The M170, an event-related field component of MEG recordings, like its EEG counterpart the N170, has repeatedly been shown to be associated with structural encoding of faces. However, the scope of information encoded during the M170 time window is still being debated. We investigated the neuronal origin of facial processing of integrated social rank cues (SRCs) and emotional facial expressions (EFEs) during the M170 time interval. Participants viewed integrated facial displays of emotion (happy, angry, neutral) and SRCs (indicated by upward, downward, or straight head tilts). We found that activity during the M170 time window is sensitive to both EFEs and SRCs. Specifically, highly prominent activation was observed in response to SRCs connoting dominance compared with submissive or egalitarian head cues. Interestingly, the processing of EFEs and SRCs appeared to rely on different circuitry. Our findings suggest that vertical head tilts are processed not only for their sheer structural variance, but as social information. Exploring the temporal unfolding and brain localization of non-verbal cue processing may assist in understanding the functioning of the social rank biobehavioral system.

    Fading Memory, Plasticity, and Criticality in Recurrent Networks

    Criticality signatures, in the form of power-law distributed neuronal avalanches, have been widely measured in vitro and provide the foundation for the so-called critical brain hypothesis, which proposes that healthy neural circuits operate near a phase-transition state with maximal information processing capabilities. Here, we revisit a recently published analysis of the occurrence of these signatures in the activity of a recurrent neural network model that self-organizes through biologically inspired plasticity rules. Interestingly, the criticality signatures are input dependent: they transiently break down at the onset of random external input, but do not appear under repeating input sequences during learning tasks. Additionally, we show that an important information processing ability, the fading memory time scale, improves when criticality signatures appear, potentially facilitating complex computations. Taken together, the results suggest that a combination of plasticity mechanisms that improves the network’s spatio-temporal learning abilities and memory time scale also yields power-law distributed neuronal avalanches under particular input conditions, pointing to a link between such abilities and avalanche criticality.
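
    For concreteness, the sketch below shows the standard way avalanche sizes are extracted from binned network activity before checking for power-law statistics; the threshold, toy input, and parameter choices are illustrative assumptions, not the procedure or model of the paper.

        import numpy as np

        def avalanche_sizes(activity, threshold=0):
            """Split a binned activity time series into avalanches separated by
            quiescent bins and return the summed activity (size) of each avalanche."""
            sizes, current = [], 0
            for a in activity:
                if a > threshold:
                    current += a
                elif current > 0:
                    sizes.append(current)
                    current = 0
            if current > 0:
                sizes.append(current)
            return np.array(sizes)

        # Toy usage with Poisson background activity; a real analysis would fit and
        # test a power law on the resulting size distribution.
        rng = np.random.default_rng(1)
        sizes = avalanche_sizes(rng.poisson(0.5, size=100_000))
        print(sizes.size, "avalanches; largest size:", sizes.max())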

    Subsampling scaling

    In real-world applications, observations are often constrained to a small fraction of a system. Such spatial subsampling can be caused by the inaccessibility or the sheer size of the system, and cannot be overcome by longer sampling. Spatial subsampling can strongly bias inferences about a system’s aggregated properties. To overcome this bias, we derive analytically a subsampling scaling framework that is applicable to different observables, including distributions of neuronal avalanches, of the number of people infected during an epidemic outbreak, and of node degrees. We demonstrate how to infer the correct distributions of the underlying full system, how to apply the framework to distinguish critical from subcritical systems, and how to disentangle subsampling and finite-size effects. Lastly, we apply subsampling scaling to neuronal avalanche models and to recordings from developing neural networks. We show that only mature, but not young, networks follow power-law scaling, indicating self-organization to criticality during development.
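
    A toy illustration of the subsampling bias discussed above: each unit taking part in an avalanche is observed only with probability p, so an avalanche of true size s is recorded with a Binomial(s, p) size and missed entirely when that count is zero. The binomial-sampling assumption, the Zipf toy distribution, and all parameters are simplifications for illustration, not the scaling framework derived in the paper.

        import numpy as np

        rng = np.random.default_rng(2)

        # Full-system avalanche sizes from a heavy-tailed toy distribution
        full_sizes = rng.zipf(1.5, size=50_000)
        full_sizes = full_sizes[full_sizes <= 10_000]   # truncate extreme draws

        p = 0.1                                         # fraction of units observed
        observed = rng.binomial(full_sizes, p)          # subsampled avalanche sizes
        observed = observed[observed > 0]               # size-0 avalanches go unnoticed

        for s in (1, 2, 5, 10):
            print(f"P(size={s}): full {np.mean(full_sizes == s):.4f}, "
                  f"subsampled {np.mean(observed == s):.4f}")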